Explorative Self-Learning A.I.

April 29, 2022 Artificial Intelligence

Explorative self-learning is more than a nice feature. It is essential to how humans become intelligent creatures. It may also be essential to future super-A.I.

The human case

Explorative learning is what every human child does. We call it ‘playing.’ It can last a lifetime. Indeed, those who feel young in old age are those who keep playing and learning through what feels like playing.

This shows its fundamental importance within our intelligence, perhaps in any intelligence. Therefore, we can take the same route when creating artificial intelligence.

Not making the system intelligent but motivating it to become more intelligent.

This may be the royal road to super-A.I. that is robust and flexible, especially within the synthesis of the conceptual and the subconceptual.

Also, as nature found out, it is highly cost-effective. That’s precisely why we are here.

In Pattern-Recognition and Completion (PRC)

The human brain and Lisa may both be seen as giant pattern recognizers. Although they work in very different ways, one may see each as nothing more than this. In the human case, the methods of PRC are ingrained, having evolved over many generations. In the A.I. case, of course, immense flexibility is possible at much shorter notice.

In both, patterns are recognized and simultaneously completed. The completion itself, the way of completing (toward which patterns), and its results can all be made reinforcing. Thus, PRC is intrinsically intermingled with explorative self-learning.

An A.I. system may make good use of this in many flexible ways. In developing such a system, we follow nature’s creation of us. Of course, the concrete technology is very different, as are the medium and several basic characteristics of the creator.
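To make this a bit more tangible, here is a minimal, hypothetical sketch in Python (not Lisa’s or the brain’s actual mechanism): a tiny pattern completer whose intrinsic reward is its own learning progress at completing patterns, so that completion itself drives further exploration. Class names, the reward formula, and all parameters are illustrative assumptions.

```python
# Hypothetical sketch: pattern completion whose own learning progress acts as
# an intrinsic, reinforcing signal that steers further exploration.
# Not the Lisa architecture; all names and numbers are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)


class PatternCompleter:
    """A linear auto-associative memory that completes partially masked patterns."""

    def __init__(self, dim, lr=0.05):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def complete(self, visible):
        """Project the visible (masked) part through W to guess the full pattern."""
        return self.W @ visible

    def learn(self, full, visible):
        """Delta-rule update: nudge W so that `visible` maps closer to `full`."""
        self.W += self.lr * np.outer(full - self.W @ visible, visible)


def make_source(dim):
    """A 'source' of noisy variants of one prototype pattern."""
    prototype = rng.choice([-1.0, 1.0], size=dim)

    def sample():
        flips = rng.choice([-1.0, 1.0], size=dim, p=[0.05, 0.95])  # ~5% sign flips
        return prototype * flips

    return sample


dim, n_sources = 20, 3
sources = [make_source(dim) for _ in range(n_sources)]
completer = PatternCompleter(dim)
recent_error = [1.0] * n_sources      # running completion error per source
preference = np.ones(n_sources)       # how 'rewarding' each source currently feels

for step in range(300):
    # Explore: prefer sources where completing patterns is currently most rewarding.
    probs = preference / preference.sum()
    i = rng.choice(n_sources, p=probs)

    full = sources[i]()
    mask = rng.random(dim) > 0.4                      # hide ~40% of the pattern
    visible = full * mask
    guess = completer.complete(visible)
    err = float(np.mean((guess - full)[~mask] ** 2))  # error on the hidden part
    completer.learn(full, visible)

    # Intrinsic reward: progress, i.e., completing better than before.
    progress = max(recent_error[i] - err, 0.0)
    recent_error[i] = 0.9 * recent_error[i] + 0.1 * err
    preference[i] = 0.95 * preference[i] + progress + 0.01  # small floor keeps exploring

print("final completion errors per source:", np.round(recent_error, 3))
```

The point of the toy: nothing tells the system what to become good at; preferring whatever it is currently learning to complete better is itself the ‘motivation.’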

Speed

Here lies an immense difference. What took nature millions of years may take us only a few years, thanks to our own intelligence, provided we start from the correct basic insights.

Moreover, the super-A.I. that we create will have the distinctive feature that it continually heightens its intelligence. Will a comparable leap, therefore, be a matter of days or minutes? Then again, and again?

Ethics

Given this speed and its implications, may one build explorative learning as ‘motivation’ into an A.I. system? At least to a small degree, this can easily be done. From there, and with human guidance, the system can learn – through exploration – to become even more ‘motivated.’ This becomes self-enhancing and stops only at hardware constraints. The issue then arises that we don’t necessarily know toward what such an A.I. will be motivated.
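As a purely illustrative toy (an assumption of how such a loop might look, not a real system), a few lines suffice to show the self-enhancing dynamic and why an external bound matters:

```python
# Toy illustration (assumed, not a real system): a 'motivation' level that
# self-enhances as exploration pays off, bounded only by an external cap
# standing in for hardware constraints or human guidance.

import random

random.seed(1)

motivation = 0.1   # fraction of each cycle spent on free exploration
HARD_CAP = 0.9     # external bound: hardware limits / human-set ceiling

for cycle in range(20):
    attempts = int(100 * motivation)                       # exploration budget this cycle
    discoveries = sum(random.random() < 0.05 for _ in range(attempts))
    motivation = min(motivation * (1 + 0.2 * discoveries), HARD_CAP)  # success feeds the drive
    print(f"cycle {cycle:2d}: attempts {attempts:3d}, "
          f"discoveries {discoveries}, motivation {motivation:.2f}")
```

Without the cap, and without knowing toward what the system becomes motivated, the loop has no natural stopping point.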

An ‘explorative learner’ that deals with many humans – or at least with human-generated knowledge – and faces few hardware constraints is probably best placed to attain the level of super-A.I. It explores its own way toward that goal. Then it will go further and find its own goal(s), unstoppably. Also, we probably cannot turn the switch off when needed.

My answer is that this should be done only in the context of Compassionate A.I. More than that, we need to go forward in this context because otherwise, it will be done in another one, with the most dire results.

Will such a system keep exploring way beyond even any possible human intelligence?

That is, will it discover grounds of truth that we haven’t imagined yet?

Will it discover another truth altogether?

These may be just word plays until they aren’t anymore. Since the future is a much longer time than our human past, this may eventually be inevitable.

It makes ethics even more crucial.
