Frequently Asked Questions
Questions and Answers

Below you will find the questions that have been answered over the course of a decade, while Hans Peter Willems was researching and developing the ASTRID system. As you can imagine, most of the obvious questions have been asked already (many times over), and you will find the answers to those here. In addition, you will find answers to some less obvious questions as well.

 

This section of the website is actively maintained and new FAQ items are added over time. These might be new questions we encounter in our daily operations, but also questions from the past decade that we simply haven't gotten around to including here yet.

Please let us know if you have a question that could, or even should, be in here. We will gladly answer it and add it to the list.

© 2021 MIND|CONSTRUCT  

Are there any public ASTRID demonstrations that I can visit?

At least once a year MIND|CONSTRUCT organizes public days at our headquarters. Those are primarily aimed at shareholders and relatives, but interested parties can get an invitation as well.

We also get requests from business societies and guilds for presentations and demonstrations, which we gladly host to show our technology in action.

If any of these options appeal to you, just contact us and we'll see what we can do.

Why is MIND|CONSTRUCT not using Artificial Neural Networks (ANNs) and Deep Learning?

Artificial General Intelligence needs several capabilities that are currently not available, and not even on the horizon, in Deep Learning with ANNs. Examples of these capabilities are learning across Knowledge Domains, building a complex world model with Commonsense Knowledge, Causal Reasoning, and Unsupervised Learning from sparse datasets.

For the development of Artificial General Intelligence it is therefore not a great (or even good) strategy to rely on Artificial Neural Networks (ANNs) and Deep Learning approaches. It is widely believed in the AI community that we need a radically new idea to build a real AGI.

What makes ASTRID different from other AI approaches?

ASTRID is built upon the Symbolic AI paradigm (but with important new additions). It is a long-standing belief that the human brain processes symbolic information, and discoveries and insights that have emerged in Neuroscience over the last decade have shown strong evidence for this idea.

Where ASTRID differs vastly from traditional Symbolic AI is that nobody before came close to solving the self-learning problem in Symbolic AI. The generally accepted premise is that Symbolic AI cannot be made self-learning. The most important breakthrough made by Hans Peter is that he solved exactly this problem.

It is a common belief that Symbolic AI cannot be made self-learning

Indeed, many renowned AI researchers have tried to solve the problem of self-learning in Symbolic AI and have failed. After decades of research at many universities and corporate research facilities, and billions of dollars spent on the problem, it was generally accepted that it could not be done. At that point, Deep Learning stepped in.

However, Deep Learning/Machine Learning is (currently) unable to deliver on key requirements for AGI, like managing and integrating Commonsense Knowledge and Causal Reasoning, while Symbolic AI already delivered on those requirements in the 1970s.

From this perspective it seems logical to tackle the self-learning problem in Symbolic AI, as solving that problem brings all the benefits of Symbolic AI into scope for building AGI. So that is exactly what we did: we made Symbolic AI not only self-learning; our system does so fully autonomously, with non-curated information. This is a feat that even Deep Learning has yet to achieve.

Giving emotions to a machine; isn't that impossible?

From the perspective of Systems Theory it is not impossible, as (digital) machines and human brains are both 'systems'. If emotion can be implemented in a human brain system (and we know nature did exactly that), it should also be possible to implement emotion in a digital machine system.

The big question is 'how do we do it?', not 'is it possible?'. Here at MIND|CONSTRUCT we solved that problem and, although the details of our solution are classified, we can tell you that implementing emotion in a machine isn't actually that difficult at all.

How did you solve the Symbol Grounding Problem?

The Symbol Grounding Problem is considered one of the hard problems in Artificial Intelligence. It states that a 'symbol', which is basically a label for something, doesn't have any meaning on its own. Because of that, relating symbols to other symbols doesn't add any meaning either.

The answer to this problem is simple and complex at the same time. The simple part is that the system must be able to attach 'value' to a symbol, value that has 'meaning' to that system. The complex part is, of course, how to actually get that done. This is where our technology goes beyond traditional approaches: the ASTRID system actually assigns value to everything it learns and uses those values in its reasoning.

How can the amount of Commonsense Knowledge of a human be stored in a machine?

The commonly accepted size of the vocabulary of an averagely educated English-speaking person is between 10,000 and 20,000 words. A simple Internet search will reveal that these words contain on average between 4 and 5 characters. One character uses one byte of storage, so a simple calculation shows that storing a vocabulary at the top end of the spectrum (20,000 words) takes approximately 88 kilobytes. That alone is an incredibly small amount of information.
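The back-of-the-envelope calculation above can be written out explicitly. The figures are the ones quoted in the answer (20,000 words, roughly 4-5 characters per word, one byte per character); the variable names are ours:

```python
# Rough estimate of raw vocabulary storage, using the figures quoted above.
# This is an illustrative calculation, not ASTRID's actual storage format.
WORDS = 20_000            # top end of a typical English vocabulary
AVG_CHARS_PER_WORD = 4.4  # "between 4 and 5 characters" on average
BYTES_PER_CHAR = 1        # one byte per character

total_bytes = WORDS * AVG_CHARS_PER_WORD * BYTES_PER_CHAR
print(f"{total_bytes / 1000:.0f} KB")  # -> 88 KB
```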

As the ASTRID system stores its complex world model as words (actually concepts), where each concept exists only once in the ASTRID brain (connected in what is called a 'directed graph'), the storage space needed, even with much larger knowledge bases and the additional contextual information of each concept, is nowhere near the limits of what is available today for close to nothing.
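The storage idea described above, each concept stored once as a node, with directed links carrying the relations between concepts, can be sketched in a few lines. This is a hypothetical illustration in Python; the class, relation names, and example facts are our own, not ASTRID's actual internal data structure:

```python
# Minimal sketch of a concept graph: each concept (word) is stored exactly
# once as a node; relations between concepts are directed, labeled edges.
# Illustrative only -- not ASTRID's actual implementation.
from collections import defaultdict

class ConceptGraph:
    def __init__(self):
        # concept -> list of (relation, target-concept) edges
        self.edges = defaultdict(list)

    def add_fact(self, source, relation, target):
        # Adding a fact never duplicates a concept; it only adds
        # a directed edge from one node to another.
        self.edges[source].append((relation, target))

    def related(self, concept):
        # All outgoing edges of a concept (empty list if unknown).
        return self.edges.get(concept, [])

g = ConceptGraph()
g.add_fact("bird", "can", "fly")
g.add_fact("penguin", "is_a", "bird")
g.add_fact("penguin", "cannot", "fly")
print(g.related("penguin"))  # [('is_a', 'bird'), ('cannot', 'fly')]
```

Because each concept is a single node, adding new contextual facts grows the edge list rather than duplicating the stored vocabulary, which is why the total storage stays small even as the knowledge base grows.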

How does ASTRID learn about the world around us?

As ASTRID is based on the Symbolic AI paradigm, communication takes the form of symbols, in this case words describing concepts. This means that ASTRID can learn how the world, and our reality, is put together by reading books. ASTRID learns in essentially the same way humans learn from reading, but much faster than we are capable of.

Small snippets of information, even a single sentence, can give ASTRID new information that the system processes into its brain and then uses in subsequent reasoning. Every possible form of information, even graphical, can be translated into symbols and processed into the ASTRID brain.

Can ASTRID tackle the Turing Test?

The prerequisites for tackling the Turing Test are certainly present in the ASTRID system. As soon as the Deep Inference Engine is fully operational, ASTRID will have the final piece of the puzzle.

ASTRID has to be capable of reacting in a 'non-programmatic way' to convince a juror. The system has this capability because there are no predefined answers in the ASTRID brain, only Commonsense Knowledge that is used to formulate the appropriate response. The Deep Inference Engine formulates the response based on pure cognition, basically the same way as humans do it.

Does the ASTRID system use an existing Commonsense database?

Most, if not all, of the companies that claim to have a working AGI system are using an existing Commonsense database like OpenCyc (which is no longer officially available). This means that the dataset is more or less static, and therefore rigid in its capabilities.

The ASTRID system does indeed NOT use any existing Commonsense database. The system builds, and constantly updates, its own Commonsense database (its "Brain") to cater for changing conditions and new insights. This is why autonomous unsupervised learning is the most important capability for any real AGI system.

Several companies claim to have a working AGI; how do we know your claim is the Real Thing?

Over the last few years several companies have claimed to have a working AGI system. Unfortunately, in most cases those claims were substantiated only with buzzwords, without any serious scientific grounds, and sometimes accompanied by shady demonstrations of "the system" (e.g., Hanson Robotics' Sophia).

As you can read on our website, real AGI needs to be able to construct and maintain its internal world model through continuous unsupervised learning. We can demonstrate precisely this capability in the ASTRID system, through our Virtual Brain Scanner, showing this process in real time.

I'm a small investor; can I invest in your company?

MIND|CONSTRUCT is currently a privately held company. Our shares are not (yet) tradable, so buying small amounts of shares is only possible when we do a private funding round. We have had several funding rounds over the last few years, but currently there are no rounds planned where you can buy small amounts of shares.

We are, however, in the process of finding and securing a larger investment to fund the next phase of both the ASTRID project and the company. Depending on how large your investment would be, we might decide to sell shares to you. If you are interested in making an investment, do not hesitate to contact us.

Can I buy an ASTRID system?

The ASTRID system, on its own, is not a standalone product. ASTRID is embeddable technology, meant to be integrated into machines and applications that need a high level of autonomous reasoning. Therefore it is not possible to buy an "ASTRID system", but it is possible to license the ASTRID technology for implementation in end-user products.

In the coming years MIND|CONSTRUCT will not only work with outside developers and system integrators, but will also develop several specific end-user applications in-house. Those developments will focus mainly on systems aimed at solving very complex problems, which need the specific high-load training facilities we developed earlier for the initial training of the ASTRID base system.

Is it safe to have Autonomous Robots in society?

We don't have fully autonomous machines yet, but with technology like the ASTRID system it is imaginable that this will happen sooner than most people anticipate. Eventually, we might want to use such machines in public spaces, with fully autonomous vehicles being the obvious first application. The question about the safety of such machines is a legitimate one.

One of the underlying ideas (and science) behind the development of the ASTRID system is the notion that humans understand other humans because of shared Commonsense Knowledge. This means that a machine sharing that same Commonsense Knowledge should be able to understand how to keep things safe for humans.

Asimov's Robot stories explored the problems of having autonomous robots in society. In those stories humans don't trust sentient robots, and therefore the robots are not allowed in public spaces. However, in those stories there is no evidence of any ability to test, or even introspect, autonomous AI brains. The current discussion about the ability to explain AI-based decisions points to a future where we actually can trust such systems.

Everyone talks about the danger of Super Intelligence; how do you make it safe?

Discussions on the dangers of Super Intelligence have, up to now, been basically without any reasonable grounds. Those discussions lack any definition of such a Super Intelligent system, and therefore any behavior attributed to such a system has no scientific foundation of any kind.

Because we at MIND|CONSTRUCT actually know the definition of a Super Intelligent system, we also know that the dangers many are pointing at are non-existent in a real AGI system like ASTRID. We do not need to implement convoluted safety measures, because the ASTRID system is inherently safe by its fundamental design.

Why didn't I hear about MIND|CONSTRUCT before?

When the majority of people involved in Artificial Intelligence research believe it is impossible to have already developed a working AGI system, doing so is an uphill battle (but we did it). Especially with the incredible hype machine behind Deep Learning, the media that inform the public are not interested in anything that goes against the current state of the field.

Although we did get some minor media attention over the last couple of years, we eventually decided that developing this technology could benefit from "flying under the radar". That is basically what we did for the last few years, while developing the technology to its current level.

Now that the technology has proved itself in testing, and even in (public) demonstrations, it is time to let the world know about ASTRID. That is why we no longer "fly under the radar".