Frequently Asked Questions
Below you will find questions that have been answered over the course of a decade of researching and developing the ASTRID system. As you can imagine, most of the obvious questions have already been asked (many times over), and you will find the answers to those here. In addition, you will find answers to some less obvious questions as well.
This section of the website is actively maintained, and new FAQ items are added over time. These may be new questions we encounter in our daily operations, but also questions from the past decade that we simply haven't gotten around to including here yet. Please let us know if you have a question that could, or even should, be in here. We will gladly answer it and add it here.
Is it safe to have Autonomous Robots in society?

We don't have fully autonomous machines yet, but with technology like the ASTRID system, it is imaginable that this might happen sooner than most people anticipate. Eventually, we might want to use such machines in public spaces, with fully autonomous vehicles (self-driving cars) being the obvious first application. The question about the safety of such machines is legitimate. One of the underlying ideas (and the science) behind the development of the ASTRID system is the notion that humans understand other humans because of shared Commonsense Knowledge. This means that a machine that shares that same Commonsense Knowledge should be able to understand how to keep things safe for humans. Asimov's Robot stories explored the problems of having autonomous robots in society. In those stories, humans don't trust sentient robots, and these robots are therefore not allowed in public spaces. In those stories, however, there is no evidence of any ability to test or even introspect autonomous AI brains. The current discussion about the ability to explain AI-based decisions points to a future where we actually can trust such systems.
Everyone talks about the danger of Super Intelligence; how do you make it safe?

Discussions on the dangers of Super Intelligence have, until now, been essentially without reasonable grounds. Those discussions lack any definition of such a Super Intelligent system, and therefore any behavior attributed to such a system has no scientific foundation of any kind. Because here at MIND|CONSTRUCT we actually know the definition of a Super Intelligent system, we also know that the dangers many people point at are non-existent in a real AGI system like ASTRID. We do not need to implement convoluted safety solutions, because the ASTRID system is inherently safe by its fundamental design.