Frequently Asked Questions
Below you will find many of the questions that were asked, and answered, over the decade in which we researched and developed the ASTRID system. As you can imagine, most of the obvious questions have already been asked (many times over), and you will find the answers to those here. In addition, you will find answers to some less obvious questions.

 

This section of the website is actively maintained, and new FAQ items are added over time. These might be new questions we encounter in our daily operations, but also questions from the past decade that we simply haven't included here yet.

Please let us know if you have a question that could, or even should, be in here. We will gladly answer it and add it to this page.


How do you protect your Intellectual Property (without a patent)

The only real protection against theft of Intellectual Property is complete secrecy. Although this is not exactly simple to implement and manage, it is a viable means of protection.

The first level of protection is the use of Non-Disclosure Agreements (NDAs), which legally shield against the unauthorized leaking of confidential information. Everyone who is closely involved with our developments is under NDA.

One of the most renowned companies using this method of secrecy is Coca-Cola: the recipe of their product is not patented, but protected by a deliberately convoluted arrangement in which several people each hold only a small part of the information.
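To illustrate the principle of splitting a secret so that no single holder can reconstruct it, here is a minimal XOR-based secret-splitting sketch. This is a generic textbook technique shown for illustration only; it does not describe the actual arrangements at Coca-Cola or at MIND|CONSTRUCT.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, holders: int) -> list[bytes]:
    """Split a secret into `holders` shares; ALL shares are needed to rebuild it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(holders - 1)]
    shares.append(reduce(xor_bytes, shares, secret))  # final share completes the XOR
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original secret."""
    return reduce(xor_bytes, shares)

shares = split_secret(b"the secret formula", 3)
assert reconstruct(shares) == b"the secret formula"
# Any subset of fewer than all shares is indistinguishable from random noise.
```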

MIND|CONSTRUCT is a Deep-Tech company, what does that mean

From the start, the ASTRID project has aimed at developing fundamental technology that brings fully autonomous decision-making capabilities to any application domain that needs them.

This boils down to developing Artificial General Intelligence, which is different from developing 'just' a specific product or service. That differentiation is what Deep-Tech stands for.

Furthermore, because Deep-Tech stands for 'fundamental' technology, the development is aimed at helping humanity progress by empowering all kinds of existing technology with new possibilities.

Why is MIND|CONSTRUCT not using Artificial Neural Networks (ANN) and Deep Learning

Artificial General Intelligence needs several functionalities that are currently not available, and not even on the horizon, in Deep Learning with ANNs (including Large Language Models). Examples of these functionalities are the ability to learn across Knowledge Domains, to build a complex world model with Commonsense Knowledge, to perform Causal Reasoning, and to learn unsupervised from sparse datasets.

Pursuing Artificial General Intelligence with Artificial Neural Networks (ANNs) and Deep Learning is therefore not a great (or even good) strategy. Many in the AI community believe that a Radical New Idea is needed to build a real AGI, and the ASTRID system represents such a radical idea.

What makes ASTRID different from other AI approaches

ASTRID is built upon the Symbolic AI paradigm (with important new additions). It is a long-standing belief that the human brain processes symbolic information, and discoveries and insights from neuroscience over the last decade have provided strong evidence for this idea.

Where ASTRID differs vastly from traditional Symbolic AI is that nobody before came close to solving the self-learning problem in Symbolic AI. The generally accepted premise is that Symbolic AI cannot be made self-learning, and that it therefore failed. By that reasoning, a Symbolic AI system that actually is capable of self-learning means that Symbolic AI didn't 'fail' at all.

Our most significant scientific and technological breakthrough is that we solved this exact problem. The ASTRID system is not only capable of self-learning; it does so completely autonomously and unsupervised.

It is a common belief that Symbolic AI cannot be made self-learning

Indeed, many famous AI researchers have tried to solve the problem of self-learning in Symbolic AI and have failed. After decades of research at many universities and corporate research facilities, and billions of dollars spent on the problem, it was generally accepted that it could not be done. After that, Deep Learning stepped in.

However, Deep Learning is (currently) unable to deliver on key requirements for AGI, such as managing and integrating Commonsense Knowledge, and Causal Reasoning. This also extends to Large Language Models like ChatGPT, which are based on Deep Learning. Meanwhile, Symbolic AI already delivered on those requirements back in the 1970s.

From this perspective, it seems logical to concentrate on the self-learning problem in Symbolic AI, as solving it brings all the benefits of Symbolic AI within usable scope for building AGI. So that is precisely what we did: we made Symbolic AI self-learning, and our system does this fully autonomously, with non-curated information.

Giving emotions to a machine; isn't that impossible

From the perspective of Systems Theory it is not impossible, as (digital) machines and human brains are both 'systems'. If emotion can be implemented in a human-brain system (and we know nature did that), it should also be possible to implement emotion in a digital-machine system.

The big question is 'how to do it?', rather than 'is it possible?'. Here at MIND|CONSTRUCT we solved that problem and, although the details of our solution are intellectual property, we can tell you that implementing emotion in a machine isn't actually that difficult at all.

How did you solve the Symbol Grounding Problem

The Symbol Grounding Problem is believed to be one of the hard problems in Artificial Intelligence. It states that a 'symbol', which is basically a label for something, doesn't have any meaning on its own. Because of that, connecting related symbols to a symbol doesn't add any meaning either.

The answer to this concern is both simple and complex at the same time. The simple part is that you have to make the system work in such a way that it can attach a 'value' to a symbol, a value that has 'meaning' to that system. The complex part is, of course, how to actually get that done. This is where our technology goes beyond traditional approaches: the ASTRID system actually values everything it learns (against its internal beliefs and assumptions) and uses those values in its reasoning.
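As a purely illustrative sketch of the idea of 'valuing' symbols against internal beliefs: the names, scores, and weighting formula below are hypothetical and do not reflect ASTRID's actual (proprietary) mechanism.

```python
# Hypothetical illustration only: a symbol's "value" is derived from how
# strongly it is woven into the system's existing beliefs.
beliefs = {
    ("fire", "causes", "heat"): 0.95,  # confidence in an existing belief
    ("heat", "causes", "pain"): 0.80,
}

def value_of(symbol: str) -> float:
    """Average confidence of the beliefs in which this symbol participates."""
    related = [conf for triple, conf in beliefs.items() if symbol in triple]
    return sum(related) / len(related) if related else 0.0

def learn(triple: tuple[str, str, str], confidence: float) -> None:
    """Weigh new information against the values of the symbols it uses."""
    support = sum(value_of(s) for s in triple) / len(triple)
    beliefs[triple] = confidence * (0.5 + 0.5 * support)  # grounded symbols weigh more

learn(("fire", "causes", "pain"), 0.9)  # indirectly supported by existing beliefs
```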

How can the amount of Commonsense knowledge of a human be stored in a machine

The commonly accepted size of the vocabulary of a moderately educated English speaker is between 10,000 and 20,000 words. A simple Internet search will reveal that these words contain, on average, between 4 and 5 characters. One character uses one byte of storage, so a simple calculation shows that storing the vocabulary at the top end of the spectrum (20,000 words) takes approximately 88 kilobytes. That is an incredibly small amount of information. It is remarkable that we can describe our reality in significant detail, and communicate about it, based on such a small amount of data.
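The back-of-the-envelope arithmetic is easy to verify, assuming one byte per character:

```python
words = 20_000       # top end of a typical adult vocabulary
avg_chars = 4.4      # average English word length, between 4 and 5 characters
bytes_per_char = 1   # plain one-byte-per-character storage

total_bytes = words * avg_chars * bytes_per_char
print(f"{total_bytes / 1000:.0f} KB")  # -> 88 KB, the figure quoted above
```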

The ASTRID system stores its complex world model as words (or rather, concepts), where each concept exists only once in the ASTRID brain and is linked to related concepts (a structure known as a 'directed graph'). Even with much larger knowledge bases and the additional contextual information attached to each concept, the storage space needed is nowhere near the limits of what is available today at minimal cost.
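A minimal sketch of such a structure, with hypothetical names (ASTRID's actual representation is proprietary): each concept is stored exactly once, and relations are directed edges between concepts.

```python
class ConceptGraph:
    """Each concept node exists exactly once; relations are directed edges."""

    def __init__(self) -> None:
        self.edges: dict[str, set[tuple[str, str]]] = {}

    def add(self, source: str, relation: str, target: str) -> None:
        # setdefault creates a concept only once, however often it is mentioned
        self.edges.setdefault(source, set()).add((relation, target))
        self.edges.setdefault(target, set())

g = ConceptGraph()
g.add("dog", "is_a", "animal")
g.add("dog", "has", "tail")
g.add("animal", "needs", "food")
# "dog" and "animal" are each stored once, no matter how many facts mention them.
```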

How does ASTRID learn about the world around us

ASTRID is based on the Symbolic AI paradigm, or more specifically, on the Physical Symbol System Hypothesis. Communication takes the form of symbols, which in this case are words describing concepts. This means that ASTRID can learn how the world, and our reality, is put together by reading about it. ASTRID learns essentially the same way we humans tend to learn: by reading books and other forms of information. Verbal communication also falls in this domain.

Small snippets of information, like a single sentence, can give ASTRID new information that the system processes into its brain (its internal world model) and then uses in subsequent reasoning. Every possible form of information, even graphical, can be translated into symbols and processed into the ASTRID brain.
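A hypothetical sketch of the general idea: a sentence is reduced to subject-relation-object symbols and merged into the internal world model. ASTRID's real extraction is, of course, far more sophisticated than this naive split.

```python
def learn_sentence(graph: dict[str, set[tuple[str, str]]], sentence: str) -> None:
    """Naive subject-relation-object extraction; purely illustrative."""
    subject, relation, *rest = sentence.lower().rstrip(".").split()
    graph.setdefault(subject, set()).add((relation, "_".join(rest)))

brain: dict[str, set[tuple[str, str]]] = {}  # stand-in for the internal world model
learn_sentence(brain, "Cats chase mice.")
learn_sentence(brain, "Mice eat cheese.")
# Each new snippet extends the same world model used in subsequent reasoning.
```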

Does the ASTRID system use an existing Commonsense database

Most, if not all, of the companies that claim to have a working AGI system use an existing Commonsense database such as OpenCyc (which is no longer officially available). This means that the dataset is more or less static and therefore rigid in its capabilities.

The ASTRID system does indeed NOT use any existing Commonsense database. The system builds, and constantly updates, its own Commonsense database (its 'Brain') to cater for changing conditions and new insights. This is why Autonomous Unsupervised Learning is the most important capability for any real AGI system, and why it is a core feature of the ASTRID system.
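A toy example of why continuous updating matters (hypothetical data and code, not ASTRID's mechanism): a static database keeps stale facts, while a self-updating one revises them as new information is learned.

```python
# Hypothetical commonsense store; correct at the time the data was curated.
commonsense = {("pluto", "is_a"): "planet"}

def update(subject: str, relation: str, new_value: str) -> None:
    """Revise an outdated belief when new information is learned."""
    commonsense[(subject, relation)] = new_value

update("pluto", "is_a", "dwarf_planet")  # learned from reading, no curator needed
```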

Several companies claim to have a working AGI, how do we know your claim is the Real Thing

Over the last few years, several companies have claimed to have a working AGI system. Unfortunately, in most cases those claims were substantiated only with buzzwords, without any serious scientific grounds, and sometimes accompanied by shady demonstrations of 'the system' (e.g., Hanson Robotics' Sophia).

As you can read on our website, a real AGI needs to be able to construct and maintain its internal world model through continuous unsupervised learning. We can demonstrate precisely this capability in the ASTRID system, through our Virtual Brain Scanner, which shows the process in real time.

Is it safe to have Autonomous Robots in society

We don't have fully autonomous machines yet, but with technology like the ASTRID system, this might happen sooner than most people anticipate. Eventually, we might want to use such machines in public spaces, with fully autonomous vehicles (self-driving cars) being the obvious first application. The question about the safety of such machines is legitimate.

One of the underlying ideas (and the science) behind the development of the ASTRID system is the notion that humans understand other humans because of shared Commonsense Knowledge. This means that a machine that shares that same Commonsense Knowledge should be able to understand how to make things safe for humans.

Asimov's robot stories explored the problems of having autonomous robots in society. In those stories, humans don't trust sentient robots, and these robots are therefore not allowed in public spaces. However, in those stories there is no way to test, or even introspect, autonomous AI brains. The current discussion about the ability to explain AI-based decisions points to a future where we actually can trust such systems.

Everyone talks about the danger of Super Intelligence, how do you make it safe

Discussions on the dangers of Super Intelligence have, until now, been largely without reasonable grounds. This is because those discussions lack any definition of such a Super Intelligent system, and therefore any behavior attributed to such a system has no scientific foundation of any kind.

Because here at MIND|CONSTRUCT we actually know the definition of a Super Intelligent system, we also know that the dangers many are pointing at are non-existent in a real AGI system like ASTRID. We do not need to implement convoluted safety measures, because the ASTRID system is inherently safe by its fundamental design.

Why didn't I hear about MIND|CONSTRUCT before

When the majority of people involved in Artificial Intelligence research believe that it is impossible for a working AGI system to exist already, developing one is an uphill battle (but we did it). Especially with the incredible hype machine behind Deep Learning, and now Large Language Models, the media outlets that inform the public are hardly interested in anything that goes against the current state of the field.

Although we did get some media attention over the last couple of years, we eventually decided that developing this technology could benefit from 'flying under the radar', known in the start-up world as 'stealth mode'. That is basically what we did for the last few years, while developing the technology to the level it is at now.

Now that the technology has proved itself in testing and even in (public) demonstrations, it is time to let the world know about ASTRID. That is why we are now coming out of stealth mode.