Weblog - Hans Peter: Super Intelligence won't kill us!
Posted: 2015-06-08

Lately there has been an abundance of media coverage about the upcoming demise of the human race, brought to us by the impending doom of Artificial Super Intelligence. The choir is being led by some pretty famous (and obviously intelligent) people like Stephen Hawking, Elon Musk and Bill Gates, to name a few. However, there are plenty of problems with both the viewpoints presented by these ambassadors of AI doom and with the way the press picks up on those statements and hypes things it doesn't really understand in the first place. Here's why...

The first and most obvious problem with the current press coverage of the AGI (Artificial General Intelligence) field is that there is almost NO actual AGI field of development. Although large corporations like IBM (Watson), Google (Deep Learning) and others like to paint the picture that they are really 'getting somewhere', the opposite is actually true: the stuff these companies work on is only a small step beyond the results that have been accomplished over the last several decades, and is therefore still at the level of 'pretty smart software', but nowhere near anything that resembles even the intelligence and capabilities of a young child. IBM's Watson is basically a very good search engine without any real cognitive abilities or the capability to make actual decisions, and Google's Deep Learning project is based on pattern recognition algorithms that have been researched for decades already but still remain just that: pattern recognition. There is currently no research that gives any promise of 'artificial neural networks' (as used by Google and others) ever being capable of supporting higher orders of reasoning, let alone something like 'machine consciousness'.

The other big problem with 'fear of AI' is the simple fact that people tend to distrust what they don't understand. And as soon as something (or someone) is more intelligent than we are, we see that as a threat instead of an opportunity (to learn). Hollywood has helped to strengthen this idea by linking the concept of 'highly intelligent' predominantly to the villain of the story: the evil genius. This predates the whole 'AI is dangerous' debate by decades or more. So now the 'genius' is a machine, and therefore a 'genius machine' must of course be evil. The actual reality, as we can currently observe in humans and as documented in many scientific studies, is that highly intelligent people tend to have a higher sense of fairness and even justice, take a much more compassionate view of society, and are driven to think about solutions rather than create problems. The logical expectation for an Artificial Super Intelligence is that it will think about anything and everything with much more detail and consideration, simply because that capacity goes hand in hand with having higher levels of intelligence.

Finally, I don't really understand the positions and statements of people like the ones mentioned above. They are known for their intelligence, business acumen and, in some cases, scientific insight, but they clearly fail to put current AI developments into a sensible scope and time-frame, and they don't seem to understand that 'intelligence' is not an isolated trait. The only way we can achieve human-level Artificial Intelligence is by incorporating everything else that makes us human.

