Dependable Autonomy
There is a lot of discussion (and speculation) about the safety, or lack thereof, of Artificial Intelligence. As Large Language Models are currently the dominant AI technology, these concerns are largely tied to that technology. One of the problems with Deep Learning-based systems is that they offer no real possibility for introspection: it is very difficult to determine why or how such a system 'decides' something. The ASTRID technology is radically different from Deep Learning in this respect: it uses Natural Language both to learn about the world and to build its internal world model. That means we can actually inspect the internal world model and even 'repair' things if needed. Within the ASTRID project, we've developed the tools to do 'full brain introspection' and further analysis of the conceptual constructs within it.
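As a rough illustration of what such an inspectable world model could look like (the class, relation names, and data below are invented for this sketch and are not the actual ASTRID tooling), consider concepts stored as plain, named relations that can be listed, queried, and corrected directly:

```python
# Toy illustration of an inspectable, language-based world model: concepts are
# plain, named nodes and relations, so they can be listed, queried, and even
# corrected by hand. This is a sketch, not the actual ASTRID tooling.

from collections import defaultdict


class ConceptGraph:
    def __init__(self):
        # (subject, relation) -> set of objects, all plain strings
        self.edges = defaultdict(set)

    def add(self, subject, relation, obj):
        self.edges[(subject, relation)].add(obj)

    def about(self, subject):
        """'Introspection': everything the model believes about a concept."""
        return {rel: objs for (subj, rel), objs in self.edges.items()
                if subj == subject}

    def retract(self, subject, relation, obj):
        """'Repair': remove a wrong belief directly."""
        self.edges[(subject, relation)].discard(obj)


brain = ConceptGraph()
brain.add("penguin", "is_a", "bird")
brain.add("penguin", "can", "fly")      # a wrong belief we can spot and fix
brain.add("penguin", "can", "swim")

print(brain.about("penguin"))           # inspect the concept
brain.retract("penguin", "can", "fly")  # repair it
```

In a representation along these lines, introspection reduces to ordinary queries over explicit, human-readable relations, and a wrong belief can be repaired in place rather than retrained away.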
In addition to the 'readable' brain, the ASTRID system has the Emotional Bias Response Model (EBRM), which governs all of the system's actions, whether they are intentional or driven by insights the machine has uncovered. This is essentially a form of Fundamental Intention Modelling that makes sure the machine will always question the validity, morality, and impact of its actions (and those of others). Instead of humans having to make sure the machine behaves safely, the machine itself constantly determines whether it is doing so and takes corrective action if needed.
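One way to picture such a governing layer, purely as a sketch (the EBRM internals are not described here, and all dimensions, weights, and thresholds below are invented), is a check that scores every proposed action on the system's value dimensions before it is allowed to run:

```python
# Illustrative sketch of an action-governing layer in the spirit of the EBRM:
# every proposed action is questioned before it runs, by scoring its expected
# impact against value dimensions. Dimensions, weights, and the threshold are
# invented for illustration only.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ProposedAction:
    description: str
    execute: Callable[[], None]
    impact: Dict[str, float]  # estimated impact per value dimension, in [-1.0, 1.0]


def governed(action: ProposedAction, weights: Dict[str, float],
             threshold: float = 0.0) -> bool:
    """Question the action before (not after) it runs; veto if it scores too low."""
    score = sum(weights.get(dim, 0.0) * value
                for dim, value in action.impact.items())
    if score < threshold:
        print(f"vetoed: {action.description} (score={score:.2f})")
        return False
    action.execute()
    return True


weights = {"validity": 1.0, "morality": 2.0, "impact_on_others": 1.5}
governed(ProposedAction("delete shared files", lambda: None,
                        {"validity": 0.2, "morality": -0.8, "impact_on_others": -0.6}),
         weights)
# -> vetoed: delete shared files (score=-2.30)
```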
Computers can't really understand anything… Until now.
When we discuss Artificial Intelligence, we tend to use terms like 'understanding' and 'intention', while in reality neither exists in the machine. Computers run programs that consist of rules that are checked; when a rule fires, the associated action is started and something gets done. The machine doesn't have to understand anything, as it is simply executing commands. The ASTRID system is also a computer program that consists of rules and actions, like any other computer program. But there is an intrinsic difference between ASTRID and traditional software: the ASTRID system works by building conceptual understanding.
Traditional software has rules that fire on specific states in our world, and actions that act directly on those states. The ASTRID system only has rules and actions that help it build a model of the world; the system itself has to figure out which rules and actions apply in response to that model. As with humans, when the internal model differs from reality, this either results in changing the internal model (something is learned) or in undertaking an action to bring reality in sync with the internal model (trying to achieve a goal). This is only possible in a machine when you have an ASTRID-like system that is capable of learning directly from new input and using that new knowledge directly to interact with both its internal world model and the real world.
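A minimal sketch of that reconcile-or-act loop, under the assumption of a simple key-value world model (all names here, such as WorldModel and reconcile, are hypothetical and not ASTRID's actual interfaces):

```python
# Hypothetical sketch of the loop described above: when the internal model and
# reality disagree, either the model is updated (learning) or an action is
# taken to bring reality back in line with the model (goal pursuit).

from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Toy internal world model: believed facts as key-value pairs."""
    beliefs: dict = field(default_factory=dict)

    def predict(self, key):
        return self.beliefs.get(key)

    def learn(self, key, value):
        self.beliefs[key] = value  # the internal model changes: something is learned


def reconcile(model: WorldModel, observation: dict, goals: dict, act):
    """Compare observed reality with the internal model and its goals."""
    for key, observed in observation.items():
        if key in goals and observed != goals[key]:
            # Reality conflicts with a goal: act to bring reality in sync
            # with the internal model (trying to achieve the goal).
            act(key, goals[key])
        elif model.predict(key) != observed:
            # Reality conflicts with a mere belief: update the model instead.
            model.learn(key, observed)


model = WorldModel({"door": "closed"})
reconcile(model,
          observation={"door": "open", "light": "on"},
          goals={"door": "closed"},
          act=lambda key, value: print(f"action: set {key} -> {value}"))
# -> action: set door -> closed   (and the model learns that the light is on)
```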
A COMPLEX WORLD VIEW
A fully autonomous system needs to understand the world at a fundamental level. We know that humans have an internal world model that is constantly updated by new experiences and insights; this is known as Commonsense Knowledge. Humans also use this internal representation of the world to steer their actions in the real world. An internal representation of the world that is updated dynamically is therefore a fundamental requirement for full autonomy.
LEARN FROM EXPERIENCE
Building an internal world model can only be achieved through continuous incremental learning, as it is impossible to build such a representation manually. And even if that were possible, the resulting model would be very rigid and lack any dynamic properties. Machine Learning based on Neural Networks (known as Deep Learning) won't work either, as retraining such a system for every small change in the environment would simply take too long. Beyond that, procuring big datasets for every small change is practically impossible.
HANDLE ANALOGIES
As with us humans, an internal world model can encounter situations that are entirely new. Without prior Commonsense Knowledge about such a situation, the only way to handle it instantly is to use analogous information to make useful inferences in the face of uncertainty. Within the ASTRID system, the capacity to handle analogies scales in proportion to the amount of trained/learned Commonsense Knowledge about the world.
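A toy sketch of what analogy-based inference over such a world model could look like (the data and function below are invented for illustration): an unfamiliar concept borrows tentative properties from the most similar known concept, and the larger the store of Commonsense Knowledge, the more candidate analogies are available.

```python
# Toy sketch of analogy-based inference (names and data invented): when a
# concept is unknown, borrow tentative properties from the most similar known
# concept, measured by overlap with the properties already observed.

known = {
    "dog":     {"has_legs", "is_animal", "can_run", "eats_meat"},
    "sparrow": {"has_wings", "is_animal", "can_fly", "lays_eggs"},
    "car":     {"has_wheels", "is_machine", "can_drive"},
}


def infer_by_analogy(observed: set, knowledge: dict) -> tuple:
    """Pick the known concept with the largest overlap and return its
    remaining properties as uncertain, analogy-based guesses."""
    best, best_overlap = None, -1
    for concept, props in knowledge.items():
        overlap = len(observed & props)
        if overlap > best_overlap:
            best, best_overlap = concept, overlap
    return best, knowledge[best] - observed


# A never-seen animal with wings: guess it behaves like its closest analogue.
analogue, guesses = infer_by_analogy({"has_wings", "is_animal"}, known)
print(analogue, guesses)   # -> sparrow {'can_fly', 'lays_eggs'} (set order may vary)
```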
© 2024 MIND|CONSTRUCT