Frequently Asked Questions

Below you will find questions that have been answered over the course of the decade in which Hans Peter Willems researched and developed the ASTRID system. As you can imagine, most of the obvious questions have already been asked (many times over), and you will find the answers to those here, along with answers to some less obvious questions as well.


This section of the website is actively maintained and new FAQ items are added over time. These may be new questions we encounter in our daily operations, but also questions from the past decade that we simply haven't gotten around to including here yet.

Please let us know if you have a question that could, or even should, be in here. We will gladly answer it and add it to the list.



Why is MIND|CONSTRUCT not using Artificial Neural Networks (ANN) and Deep Learning

Artificial General Intelligence needs several functionalities that are currently not available, and not even on the horizon of Deep Learning with ANNs. Examples of these functionalities are learning across Knowledge Domains, building a complex world model with Commonsense Knowledge, Causal Reasoning, and Unsupervised Learning from Sparse Datasets.

It is therefore not a great (or even a good) strategy to pursue Artificial General Intelligence with Artificial Neural Networks (ANNs) and Deep Learning approaches. Many in the AI community believe that we need a Radical New Idea to build a real AGI.

What makes ASTRID different from other AI approaches

ASTRID is built upon the Symbolic AI paradigm (but with important new additions). It is a long-standing belief that the human brain processes symbolic information, and discoveries and insights that have emerged in Neuroscience over the last decade have shown strong evidence for this idea.

Where ASTRID differs vastly from traditional Symbolic AI is that nobody before has come close to solving the self-learning problem in Symbolic AI. The widely accepted premise is that Symbolic AI cannot be made self-learning. The most impressive breakthrough made by Hans Peter is that he solved exactly this problem.

It is a common belief that Symbolic AI cannot be made self-learning

Indeed, many famous AI researchers have tried to solve the problem of self-learning in Symbolic AI and have failed. After many decades of research at numerous universities and corporate research facilities, and billions of dollars spent on the problem, it was basically accepted that it could not be done. After that, Deep Learning stepped in.

However, Deep Learning/Machine Learning is (currently) unable to deliver on key requirements for AGI, such as managing and integrating Commonsense Knowledge and Causal Reasoning, while Symbolic AI already delivered on those requirements back in the 1970s.

From this perspective it seems logical to tackle the self-learning problem in Symbolic AI, as solving it brings all the benefits of Symbolic AI into scope for building AGI. So that is exactly what we did: we made Symbolic AI self-learning, and our system does this truly autonomously, on non-curated information. This is a feat that even Deep Learning has yet to achieve.

Giving emotions to a machine; isn't that impossible

From the perspective of Systems Theory it is not impossible, as (digital) machines and human brains are both 'systems'. If emotion can be implemented in a human brain-system (and we know nature did exactly that), it should also be possible to implement emotion in a digital machine-system.

The big question is 'how to do it?', not 'is it possible?'. Here at MIND|CONSTRUCT we have solved that problem and, although the details of our solution are classified, we can tell you that implementing emotion in a machine isn't actually that difficult at all.

How did you solve the Symbol Grounding Problem

The Symbol Grounding Problem is deemed one of the hard problems in Artificial Intelligence. It states that a 'symbol', which is basically a label for something, has no meaning on its own. Because of that, attaching related symbols to a symbol doesn't add anything of value either.

The answer to this problem is simple and complex at the same time. The simple part is that you have to make the system work in a way that lets it add 'value' to a symbol, value that has 'meaning' to that system. The complex part is, of course, how to actually get that done. This is where our technology goes beyond the traditional approaches: the ASTRID system actually values everything it learns and uses those values in its reasoning.
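As a purely hypothetical toy (the actual valuing mechanism in ASTRID is classified, so the fields and numbers below are our own inventions for illustration), the difference between a bare label and a 'valued' symbol could be sketched like this:

```python
# Hypothetical toy contrast between a bare symbol (just a label) and a symbol
# that carries values the system itself assigned. The 'salience' and 'valence'
# fields are invented for illustration; they are NOT ASTRID's actual mechanism.
from dataclasses import dataclass

@dataclass
class BareSymbol:
    label: str                 # a label alone carries no meaning for the system

@dataclass
class ValuedSymbol:
    label: str
    salience: float            # how much this symbol matters to the system
    valence: float             # how positively or negatively the system values it

    def weight(self) -> float:
        # values give a reasoner something to rank and compare symbols by
        return self.salience * self.valence

fire = ValuedSymbol("fire", salience=0.9, valence=-0.7)
print(round(fire.weight(), 2))  # -0.63: usable in reasoning, unlike a bare label
```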

How can the amount of Commonsense knowledge of a human be stored in a machine

The commonly accepted vocabulary size of an averagely educated English speaker is between 10,000 and 20,000 words. A simple Internet search will reveal that these words average between 4 and 5 characters. One character takes one byte of storage, so a simple calculation shows that storing the vocabulary at the top end of that range (20,000 words) takes approximately 88 kilobytes. That alone is an incredibly small amount of information.
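As a back-of-the-envelope check (the word count and average word length are the rough figures quoted above, not measured data), the calculation looks like this:

```python
# Back-of-the-envelope estimate of the raw storage needed for a human-sized
# vocabulary, using the rough figures quoted above (assumptions, not measurements).
VOCABULARY_SIZE = 20_000   # top end of the quoted 10,000-20,000 word range
AVG_WORD_LENGTH = 4.4      # mid-point of the quoted 4-5 characters per word
BYTES_PER_CHAR = 1         # one byte per character (plain text storage)

total_bytes = VOCABULARY_SIZE * AVG_WORD_LENGTH * BYTES_PER_CHAR
print(f"{total_bytes / 1000:.0f} kB")  # -> 88 kB, the figure quoted above
```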

The ASTRID system stores its complex world model as words (actually concepts) in a 'directed graph': each word exists only once in the ASTRID brain, with the relations between concepts stored as links. Even with much larger knowledge bases, and with the additional contextual information attached to each concept, the storage space needed is nowhere near the limits of what is available today for next to nothing.
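As a rough sketch of this storage idea (our own minimal illustration, not ASTRID's actual data structures), a graph in which every concept exists exactly once and relations live on the edges could look like this:

```python
# Minimal sketch of a concept graph: every concept is stored exactly once,
# and contextual information lives on the directed edges between concepts.
# An illustration of the general idea only, not ASTRID's internal format.
from collections import defaultdict

class ConceptGraph:
    def __init__(self):
        # one node per unique concept; each edge carries its relation label
        self.edges = defaultdict(set)

    def add_fact(self, subject: str, relation: str, obj: str) -> None:
        # re-adding a concept never duplicates a node, only (possibly) an edge
        self.edges[subject].add((relation, obj))

    def related(self, concept: str) -> set:
        return self.edges[concept]

graph = ConceptGraph()
graph.add_fact("bird", "can", "fly")
graph.add_fact("penguin", "is_a", "bird")
graph.add_fact("penguin", "cannot", "fly")
print(graph.related("penguin"))  # e.g. {('is_a', 'bird'), ('cannot', 'fly')}
```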

How does ASTRID learn about the world around us

As ASTRID is based on the Symbolic AI paradigm, communication takes the form of symbols, in this case words describing concepts. This means that ASTRID can learn how the world, and our reality, is put together by reading books. ASTRID learns essentially the same way humans learn from books, but much faster than we are capable of.

Small snippets of information, even a single sentence, can give ASTRID new information that the system processes into its brain and then uses in subsequent reasoning. Every possible form of information, even graphical, can be translated into symbols and processed into the ASTRID brain.
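As a toy illustration of the general pipeline (a sketch under our own simplifying assumptions; ASTRID's actual parsing and integration are far richer), a single sentence could be turned into a symbolic fact like this:

```python
# Toy sketch: turn a single subject-verb-object sentence into a symbolic fact
# and merge it into an existing knowledge store. The naive three-word split is
# our simplifying assumption; it is not how ASTRID actually parses text.
def sentence_to_fact(sentence: str) -> tuple:
    subject, relation, obj = sentence.lower().rstrip(".").split()
    return (subject, relation, obj)

knowledge = set()
fact = sentence_to_fact("Water extinguishes fire.")
knowledge.add(fact)       # the new information is integrated immediately...
print(fact in knowledge)  # ...and is available for subsequent reasoning
```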

Can ASTRID tackle the Turing Test

The prerequisites for tackling the Turing Test are certainly present in the ASTRID system. As soon as the Deep Inference Engine is fully operational, ASTRID will have the final piece of the puzzle.

To convince a juror, ASTRID has to be capable of reacting in a 'non-programmatic' way. The system has this capability because there are no predefined answers in the ASTRID brain, only Commonsense Knowledge that is used to formulate an appropriate response. The Deep Inference Engine formulates the response based on pure cognition, essentially the same way humans do it.

Does the ASTRID system use an existing Commonsense database

Most, if not all, of the companies that claim to have a working AGI system are using an existing Commonsense database, such as OpenCyc (which is no longer officially available). This means that the dataset is more or less static and therefore rigid in its capabilities.

The ASTRID system does NOT use any existing Commonsense database. The system builds, and constantly updates, its own Commonsense database (its “Brain”) to cater for changing conditions and the handling of new insights. This is why Autonomous Unsupervised Learning is the most important capability for any real AGI system, and why it is a core feature of the ASTRID system.

Several companies claim to have a working AGI; how do we know your claim is the Real Thing

Over the last few years, several companies have claimed to have a working AGI system. Unfortunately, in most cases those claims were substantiated only by buzzwords without any serious scientific grounds, sometimes accompanied by shady demonstrations of “the system” (e.g., Hanson Robotics' Sophia).

As you can read on our website, real AGI needs to be able to construct and maintain its internal world model through continuous unsupervised learning. We can demonstrate precisely this capability in the ASTRID system, through our Virtual Brain Scanner, which shows this process in Real Time.