A new classification for AI systems

Published: 2023-10-04 in CogSci || Author: Hans Peter Willems - CEO MIND|CONSTRUCT

Artificial Intelligence needs a new system of classification, as the current terms, like Strong AI, Weak AI, and even Artificial General Intelligence (AGI), have lost their meaning.

Introduction

Classification of Artificial Intelligence started out with a rather simple division: Weak AI and Strong AI. Weak AI was what we already had, and it didn’t look like actual human-like intelligence and capabilities. Strong AI was what should look and act like human intelligence and competence, but which we didn’t have yet; obtaining it was expected to be far in the future. From there, Weak AI has become synonymous with the things we have that look like AI but are no longer deemed to be AI. Strong AI has become the designation for the kind of AI we won’t have for decades to come, or ever. Basically, these terms have lost their usefulness.

Subsequently, we got a new division based on the previous one. Weak AI became just ‘AI’, and Strong AI became ‘AGI’, which stands for Artificial General Intelligence. The idea is that the ‘general’ in AGI points to the capability to handle general concepts, to be able to generalize across application domains, much like what humans are capable of. Basic AI is anything that looks like AI, but lacks this ‘general’ capability. However, these categorizations, both the old one and the more recent one, are problematic.

The first problem is that these categories only describe what a system should look like from the outside. They give no definition of which capabilities must actually be present to produce the observed behavior. The second problem is that these categories do not define the outer boundary: they don’t say when something falls outside both of the (only two) categories. These two issues have resulted in many faulty classifications of all kinds of software systems. Most systems that are now classified as Artificial Intelligence are just static algorithms that process statistical data. Although these systems perform impressive feats at first glance, they do nothing that even remotely resembles human cognition, and on closer inspection their many shortcomings become obvious.

A better classification

To solve this misclassification, I propose a categorization based on actual implemented competences in software systems. From this perspective, we can identify the following categories: Functional systems, Smart systems, Intelligent systems, and Conscious systems.

Functional systems: predefined

This first category defines the outside boundary, or what is NOT deemed AI in any form. These systems perform previously defined tasks, and follow previously defined algorithmic paths to execute these tasks. There is no option to deviate from these predefined tasks and algorithms. Almost all the software systems we have and use today are in this category. Functional systems are predefined.

As it is impossible to have software without predefined algorithms, every subsequent category does contain predefined algorithms, but adds capabilities that are absent in the previous categories.
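To make the boundary concrete, here is a minimal sketch; the example and every name in it are hypothetical, invented purely for illustration. A Functional system follows one fixed path, and no amount of input ever changes that path:

```python
# Hypothetical example: a rule table that maps known inputs to fixed outputs.
SHIPPING_RATES = {"standard": 4.95, "express": 9.95}

def shipping_cost(option: str, weight_kg: float) -> float:
    """Follow a fixed, predefined path; unknown inputs are simply rejected."""
    if option not in SHIPPING_RATES:
        raise ValueError(f"unknown option: {option}")
    # The formula never changes at runtime, no matter what data it sees.
    return SHIPPING_RATES[option] + 0.50 * weight_kg

print(shipping_cost("express", 2.0))  # 10.95
```

However the program is exercised, both the task and the algorithm were fixed before the first input arrived, which is exactly what places such systems outside AI in this classification.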

Smart systems: reactive

In addition to the basic algorithmic structures that Functional systems have, Smart systems use new actual data (inputs) to tune the algorithmic functions of the system. This means that with different inputs, the system can generate different outcomes. Smart systems are ‘reactive’.

Deep Learning systems, and the Large Language Models like ChatGPT that are built on top of them, fall into this category. These systems are not ‘intelligent’ because no functionality or capability is implemented in them to perform any form of intelligent or cognitive behavior. There is no understanding of concepts or internal deliberation of possible outcomes.
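A toy sketch of the ‘reactive’ property may help; this is hypothetical code, not a description of any real product. The single parameter below is set by the data the system observes, so different inputs yield different outcomes, yet nothing resembling deliberation takes place:

```python
# Hypothetical example: a one-parameter predictor whose behavior is tuned
# by the data it observes, not by a programmer editing the code.
class ReactivePredictor:
    def __init__(self) -> None:
        self.weight = 0.0  # tuned by inputs, not predefined

    def predict(self, x: float) -> float:
        return self.weight * x

    def observe(self, x: float, y: float, lr: float = 0.01) -> None:
        # Nudge the parameter toward whatever the data suggests.
        error = self.predict(x) - y
        self.weight -= lr * error * x

model = ReactivePredictor()
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 200:
    model.observe(x, y)
print(round(model.predict(5.0), 2))  # close to 10.0: the outcome depends on the inputs seen
```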


Intelligent systems: adaptive

On top of the capabilities of Smart systems, these systems maintain a continuously updated model of the real world, also known as commonsense knowledge. Additionally, they can use new data to tune their internal algorithms that they use to reason about their worldview and the questions at hand. Intelligent systems are therefore ‘adaptive’.

This category overlaps largely with what is commonly known as Cognitive Architectures, and comes close to what the term AGI was initially meant to describe. The ASTRID system, developed by MIND|CONSTRUCT, falls squarely into this category.
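As an illustration only (this is not ASTRID’s implementation, and all names here are invented), an ‘adaptive’ system might keep an explicit, updatable world model and reason over it, so that its answers change when its worldview changes:

```python
# Hypothetical sketch: an explicit world model plus inference rules,
# both of which can be revised at runtime as new data comes in.
world_model = {("bird", "can_fly"): True}    # commonsense facts
rules = {("penguin", "is_a"): ("bird",)}     # inference rules

def believes(subject: str, prop: str) -> bool:
    """Answer from the world model directly, or infer via 'is_a' links."""
    if (subject, prop) in world_model:
        return world_model[(subject, prop)]
    parents = rules.get((subject, "is_a"), ())
    return any(believes(parent, prop) for parent in parents)

print(believes("penguin", "can_fly"))         # True: inherited from 'bird'
world_model[("penguin", "can_fly")] = False   # new data revises the worldview
print(believes("penguin", "can_fly"))         # False: reasoning adapted
```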

Conscious systems: evolving

These are systems that have internal states that depend both on previous internal states (nature, nurture, previous experiences) and on currently evolving external data (inputs). They are aware of their current internal states and of how those states adapt based on inputs. They use this awareness to tune their cognitive processes, which can in turn result in further adjustments to the internal states. This basically describes (internal) reactions to experiences, and it is how systems can evolve beyond their initial programming and training. Conscious systems are therefore ‘evolving’.

The ASTRID system is crossing over into this category. It has internal states that are constantly adjusted as new data comes into the system. These dynamically changing internal states actively tune the cognitive processes, and we can therefore state that the system has awareness of its internal states.
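Purely as a hypothetical sketch (again, not a description of ASTRID’s internals), the ‘evolving’ property could look like an internal state that depends on its own history and is read back to tune the decision process itself:

```python
# Hypothetical sketch: internal state that depends on its own history,
# and a cognitive step that is tuned by reading that state back.
class EvolvingAgent:
    def __init__(self) -> None:
        self.confidence = 0.5  # internal state, carried across experiences

    def experience(self, outcome_was_good: bool) -> None:
        # The new state depends on the previous state AND the new input.
        delta = 0.1 if outcome_was_good else -0.1
        self.confidence = min(1.0, max(0.0, self.confidence + delta))

    def decide(self, options: list) -> str:
        # "Awareness": the agent reads its own state to tune its process.
        if self.confidence > 0.6:
            return options[0]                    # act decisively
        return f"deliberate over {options}"      # cautious mode re-examines

agent = EvolvingAgent()
print(agent.decide(["leap"]))   # deliberates: confidence is still 0.5
agent.experience(True)
agent.experience(True)
print(agent.decide(["leap"]))   # leaps: accumulated experience changed behavior
```

The point of the sketch is the feedback loop: experiences change the state, and the changed state changes how the next experience is processed, which is behavior that was not fixed by the initial programming.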

Conclusion

Deep Learning systems and Large Language Models like ChatGPT have been designated as Artificial Intelligence mainly because they seem to do something intelligent on the outside. But how something looks on the outside is no guarantee of it actually being what it looks like. So far, that is just akin to a magician’s trick. To know what is really going on, we need to look inside. This proposed classification gives us a point of reference to do just that.

In the case of Large Language Models, this categorization shows that they are not intelligent. At least not in a way that would put them in the class of ‘Intelligent systems’. This classification system also shows that, to reach the class of Intelligent systems, there are requirements that are hard, or even impossible, for Large Language Models to meet. It shows that for a system to be really intelligent, it needs to be able to reason about every facet of reality. That means it has to implement the capability to maintain an actual and up-to-date model of reality, and integrate this model continuously into its reasoning process. As it stands now, Deep Learning and Large Language Models won’t get us there.

The seemingly radical alternative (no Deep Learning) represented by the ASTRID system has all the capabilities needed to reach even the level of Conscious systems. ASTRID is therefore a unique approach to actually getting to the level of real Artificial Intelligence.


About the author:

Hans Peter is the founder and CEO of MIND|CONSTRUCT. He has experience with more than 20 programming languages, has written more than a million lines of code, and has over 30 years of experience in IT, with broad expertise in software engineering, software quality assurance, project management, and business process engineering. Besides this, Hans Peter has been a serial entrepreneur for more than 30 years, has worked as a business coach and consultant on many projects, and has worked for several years as a teacher in the domain of software development.
