Inducive Technology
It has been stated many times over the last several years: we need a radical new idea to achieve Artificial General Intelligence. Yann LeCun (Chief AI Scientist, Meta) has said in many interviews that we not only lack the technology to build AGI; we don't even have the science yet. So, obviously, we need a radical new idea. Large Language Models (LLMs), currently the leading Artificial Intelligence technology, lack many key ingredients needed to build a system that is capable of learning directly from experience. Large Language Models are also unable to reason causally, or in the face of uncertainty. Solving these key issues in Large Language Models isn't even on the horizon yet, contrary to what the hype wants you to believe. It might even turn out to be impossible to implement these features in LLM-based systems altogether.
Hans Peter once stated, “Simplicity is infinitely more scalable than Complexity”, a statement that is more important than it seems at first. To build a system that can scale to incredible capabilities, its core functionality should be so elementary that it is actually simple. This is certainly a Radical New Idea in a world where Deep Learning, the technology that powers Large Language Models, is one of the most complex and energy-hungry technologies on the planet. At MIND|CONSTRUCT, we not only developed the science, we also built the technology, because the best way to prove your science is to implement it in working technology. This technology can be introduced into every technological domain. Together with selected industry partners, we are now in the process of developing possible applications to be implemented in third-party products and services.
© 2024 MIND|CONSTRUCT
Back in the seventies, when Deep Learning and Large Language Models had not yet been invented (although Neural Networks had), Symbolic Artificial Intelligence was the leading technology in the field, and it held a lot of promise for the future of AI. It built on decades of scientific research and had already found its way into serious applications, by that time mainly known as Expert Systems. Many Expert Systems were incredibly successful, and some of them are still in use today. Among the most impressive feats those systems could already perform were Causal Reasoning and the handling of (symbolic) information to describe the world in detail. The only real hurdle they needed to clear to get to General Intelligence was to somehow get these systems to acquire their information in an automated way. Not only because typing in all this information manually would take even a large group of people decades, but also because for real Artificial General Intelligence, you need such a system to be able to update its information constantly.
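To make the Expert System idea concrete: such systems typically encoded knowledge as explicit if-then rules over symbols, and a forward-chaining engine derived new facts from known ones, which is what made simple causal chains explicit and traceable. The sketch below is purely illustrative (the rules and names are invented for this example; it does not represent MIND|CONSTRUCT's technology):

```python
# Illustrative sketch of classic Expert System knowledge: symbolic
# if-then rules, reasoned over by a simple forward-chaining loop.

RULES = [
    ({"rain"}, "wet_ground"),           # if it rains, the ground gets wet
    ({"wet_ground"}, "slippery_road"),  # wet ground makes the road slippery
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule when all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"rain"}, RULES))
# Derives wet_ground, then slippery_road: a traceable causal chain.
```

Because every derived fact traces back to a named rule, a system like this can explain *why* it reached a conclusion; the open problem described above was getting such rule bases filled and updated automatically rather than by hand.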
The problem of self-learning turned out to be much harder than initially thought. When Deep Learning eventually came along, made possible mainly by the advent of the Internet and Social Media delivering the required Big Data, the field basically dropped Symbolic AI and declared making it self-learning impossible. The Symbolic approach had (and still has) so many advantages over other systems (mainly Deep Learning) that it is strange that we dropped all that promise of high-level reasoning and Symbolic brain models (which have since proven to be eerily close to how our human brains store information) for a system that is only somewhat trainable but lacks everything else. Although choosing Symbolic AI as the basis for development is almost publicly ridiculed today, it is more logical to solve just the self-learning problem for Symbolic AI (and get everything else for free) than to try to solve everything else in Deep Learning. And, of course, that is what we did.
Deep Learning is currently the prevalent technology in Artificial Intelligence R&D. It is the underlying technology that powers Large Language Models like ChatGPT. From that fact alone, it appears that this is the way forward towards higher levels of machine intelligence. However, Deep Learning and Artificial Neural Networks, the underlying scientific paradigm, are handicapped by several disadvantages and other issues. The first and most visible issue is that Deep Learning needs enormous amounts of computing power for training. This is very costly, not only in terms of the incredible amount of server hardware required, but also in terms of power consumption. Both the power issue and the space required for all those machines make it impossible to implement such a system in any usable way in an autonomous mobile solution such as robots or self-driving cars.
The second obvious problem is that Deep Learning requires large datasets (Big Data) for everything it needs to learn. As soon as a small fragment of the learned pattern changes, it requires a new dataset and retraining. In addition, a Deep Learning system can only be trained for one purpose at a time: it is impossible to train a single system to be good at both playing chess and recognizing cats in pictures. This rigidity is one of the main obstacles to building AGI with Deep Learning technology. Finally, Deep Learning, including Large Language Models, currently lacks the capability to integrate knowledge into a complex world view. Instead of enriching the knowledge about a concept, which Commonsense Knowledge Acquisition requires, Deep Learning does the opposite and reduces a concept to its most basic pattern.