Fuzzy Semantic Relations in the ASTRID system

Published: 2019-01-20 in CogSci

A recent breakthrough in the ASTRID project makes it possible for the system to recognize fuzzy and esoteric semantic relations in its training data.


A new discovery in the semantic model that powers the ASTRID system has made it possible for the system to recognize fuzzily defined predicates. These predicates describe the (logical) relations between the concepts that make up ASTRID's internal world-model, and they allow the system to reason about the world in structural, causal, and temporal contexts.
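To make that idea concrete, the sketch below shows one way such predicates could be represented. All names and relations here are hypothetical illustrations, not ASTRID's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical illustration of a predicate-based world-model.
# None of these names come from ASTRID; they only show the general idea
# of concepts linked by structural, causal and temporal relations.

@dataclass(frozen=True)
class Predicate:
    relation: str   # e.g. "part_of", "causes", "happens_before"
    subject: str    # a concept in the world-model
    obj: str        # another concept

world_model = [
    Predicate("part_of", "wheel", "car"),                 # structural
    Predicate("causes", "rain", "wet_street"),            # causal
    Predicate("happens_before", "lightning", "thunder"),  # temporal
]

# Simple query over the model: what does 'rain' cause?
effects = [p.obj for p in world_model
           if p.relation == "causes" and p.subject == "rain"]
print(effects)  # ['wet_street']
```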

The original basis for this kind of reasoning is called 'predicate logic'. Predicates are traditionally logical (hence the name), and therefore always either true or false. This is the only way to do 'reasoning' in a symbolic, rule-based system that lacks common-sense knowledge.
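For comparison, here is a minimal illustration of how strictly Boolean such reasoning is. This is generic textbook predicate logic, not tied to ASTRID:

```python
# Classical predicates are strictly Boolean: a fact either holds or it does not.
facts = {("bird", "tweety"), ("bird", "polly")}

def is_bird(x: str) -> bool:
    return ("bird", x) in facts  # True or False, nothing in between

# A rule-based inference: all birds can fly (naive, with no exceptions,
# because the system has no common-sense knowledge to say otherwise).
def can_fly(x: str) -> bool:
    return is_bird(x)

print(can_fly("tweety"))  # True
print(can_fly("rock"))    # False
```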

Traditionally, predicates are also pre-defined in systems that use predicate logic. ASTRID was already capable of finding predicates in the training data without pre-defined lists of predicates, and now those predicates no longer even have to be 'logically' true or false.
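As a rough picture of what 'finding predicates in data' can look like, the naive pattern-based sketch below extracts relation triples from simple sentences. ASTRID's real extraction method is not described in this article; this only illustrates that predicates need not be pre-defined:

```python
import re

# Naive illustration: pull subject-relation-object triples out of
# three-word English sentences. Purely hypothetical, not ASTRID's method.
PATTERN = re.compile(r"^(\w+) (\w+) (\w+)$")

def extract_predicate(sentence: str):
    match = PATTERN.match(sentence.strip().rstrip(".").lower())
    if match:
        subject, relation, obj = match.groups()
        return (relation, subject, obj)
    return None

print(extract_predicate("Rain causes flooding."))  # ('causes', 'rain', 'flooding')
print(extract_predicate("Birds eat seeds."))       # ('eat', 'birds', 'seeds')
```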


Humans understand that reality is not inherently true or false. Obviously, some things are either true or false, but many things are not that black and white. Some things are mostly true, but sometimes false. Other things are 'somewhat' true in a certain context, but also somewhat false. Traditional predicate logic cannot deal with concepts like 'sometimes', 'mostly', 'somewhat', and 'seldom', or other fuzzy determinations like that.
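One standard way to represent 'mostly' or 'somewhat' is a truth degree in [0, 1] instead of a Boolean. The sketch below uses textbook fuzzy-logic operators; the numeric degrees assigned to each word are illustrative assumptions:

```python
# Graded truth: map fuzzy quantifiers to degrees in [0, 1].
# These particular numbers are invented for illustration.
degree = {"never": 0.0, "seldom": 0.2, "sometimes": 0.5,
          "mostly": 0.8, "always": 1.0}

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)   # the standard Goedel t-norm

def fuzzy_not(a: float) -> float:
    return 1.0 - a

# "Mostly true AND not sometimes false" is itself a matter of degree:
print(fuzzy_and(degree["mostly"], fuzzy_not(degree["sometimes"])))  # 0.5
```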

In the past, the solution for this was called 'reasoning with uncertainty', and models such as 'fuzzy logic' and 'Bayesian logic' were invented to handle it. The problem with these approaches is that the 'fuzzy' part has no real meaning: it is just a calculated value that doesn't relate to anything in the world. The ASTRID system, on the other hand, can now actually learn that 'seldom' means, for example, 'once a month' in one context and 'once every century' in another context.
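That context-dependence can be pictured as a mapping from (quantifier, context) pairs to concrete rates. The sketch below is purely hypothetical; where ASTRID would learn such a grounding from data, the table here is written out by hand to illustrate the claim:

```python
# Hypothetical context-dependent grounding of fuzzy quantifiers.
# The entries are invented; a learning system would acquire them from data.
grounding = {
    ("seldom", "dentist_visits"): "once a month",
    ("seldom", "total_solar_eclipses"): "once every century",
}

def interpret(quantifier: str, context: str) -> str:
    return grounding.get((quantifier, context), "unknown in this context")

print(interpret("seldom", "dentist_visits"))        # once a month
print(interpret("seldom", "total_solar_eclipses"))  # once every century
```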


