News Article
New research paper - Self-learning Symbolic AI: A critical appraisal

Published: 2021-03-12 in CogSci

Hans Peter Willems has published a new research paper, "Self-learning Symbolic AI: A critical appraisal", in response to Gary Marcus’s paper "Deep Learning: A critical appraisal".

Abstract from the paper:

In the current day and age, it is easy to forget that the scientific field of Artificial Intelligence was largely built upon the notion of “symbol manipulation”. Now that Deep Learning is the prevalent technology for AI, we have first stepped away from the accomplishments of Symbolic AI, only to circle back to them and restate them as problem areas that we now need to solve with Deep Learning and other forms of Artificial Neural Nets.

The level of hype that contributed to the popularity of Deep Learning and related approaches has also had a disastrous effect on the availability of funding for symbolically oriented projects. For those working in the field of Symbolic AI, the new AI winter began several years ago. Funding for Symbolic AI is close to nonexistent, while Deep Learning is “where the money is”, resulting in a research focus that might never yield the results we are after, while ignoring technological advances that have already been made, some of them decades ago.

I present a critical appraisal of Symbolic AI approaches, relating them to the current limitations of Deep Learning, and show that those limitations can be solved, and in many cases have already been solved, by symbolic approaches.

NOTE: This news article is presented here for historical perspective only.

This article is more than two years old. Information in it might therefore have changed, become incomplete, or even become completely invalid since its publication date. Included web links (if present) might point to pages that no longer exist, have been moved over time, or now contain unrelated or insufficient information. No expectations or conclusions should be derived from this article or any forward-looking statements therein.
