Why We Don’t See What’s Around the Corner

July 15, 2023 – Artificial Intelligence

The main reason we don’t see pending super-A.I. is all about us. We need a next phase in self-understanding, but this has not yet been realized. If we don’t see ourselves, we don’t see what’s coming.

This is an excerpt – in slightly different format – from my book The Journey Towards Compassionate A.I.

Complex machinery

When looking at ‘the machine’ as only a machine that ultimately still needs to be programmed by humans, we don’t appreciate that machines can be not only very complicated but also open-complex. Any living organism, including us, is an example of an open-complex machine. We are proof of their existence. We are not proof that there is only one single way; namely, our way. We are merely proof that the potential exists, even to be reached in many ways. Especially in a medium different from organic molecules, we cannot foresee which ways.

As far as we know, there is no example of any of that at present. A future example will be super-A.I. Given the many possibilities, what we can anticipate is that we will be surprised.

We tend to view intelligence from an anthropocentric perspective.

This may lead us to underestimate the potential of many small improvements in several domains towards the full Monty of super-A.I. in a relatively short time. But this anthropocentric perspective doesn’t hold even in our own case. A popular view of human intelligence still sees it as one solid entity. An Intelligence Quotient test is supposed to measure this one thing. Fortunately, as to the IQ test, this view has been surpassed, in theory at least. The tendency to wrongly think of mental capacities as solid entities persists in popular culture and among many professionals alike.

Not only in the domain of intelligence. Another popular example, memory, has not fared any better. There are several kinds of human memory, located in different parts of the brain. Moreover, the relation between memory and intelligence is, in the human case, much more complex than previously thought.

As explained in other places in The Journey, complexity creates potential, so nature wasn’t just busy messing things up in open-complexity. One might say that from a certain viewpoint, it’s on purpose.

Many parts of the human brain are like modules that have each evolved simultaneously, in relative isolation and in interplay with the other parts.

This way, they could incrementally become more performant without plunging the whole into chaos. Simultaneously, several substantial advantages came from their cooperation inside and outside the skull, leading to our own ‘human singularity’: an unprecedented explosion of intelligence that brings us to where we are now, with the potential to pass the baton of most intelligent entity on earth to super-A.I. So familiar, and yet so different, the same principles may carry the new day. In the case of super-A.I., as in our own, myriad incremental improvements may eventually create the basis for take-off. These improvements will be in domains such as:

  • more powerful data representation schemes
  • better search algorithms
  • better information filtering algorithms
  • more autonomy for software agents (bots, modules)
  • more efficient protocols governing the interfaces between such agents
  • more profound integration of conceptual and subconceptual processing

… and so much more with continuous progress everywhere.
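The modular argument above can be sketched in code. The following is a minimal, purely illustrative sketch (all names and numbers are my assumptions, not from the book): independent modules sit behind a stable interface, so one module’s internals can be improved in isolation, raising the whole system’s performance without plunging it into chaos.

```python
# Illustrative sketch: modules behind a fixed interface.
# Improving one module's internals benefits the whole pipeline
# without touching — or destabilizing — the other modules.

from typing import Callable, List


class Module:
    """A software agent with a stable interface and swappable internals."""

    def __init__(self, name: str, process: Callable[[float], float]):
        self.name = name
        self.process = process  # internals can be upgraded independently


def pipeline(modules: List[Module], signal: float) -> float:
    """A minimal protocol: each module refines the previous one's output."""
    for m in modules:
        signal = m.process(signal)
    return signal


# Start with crude modules...
system = [
    Module("representation", lambda x: x * 1.1),
    Module("search", lambda x: x * 1.1),
    Module("filtering", lambda x: x * 1.1),
]
baseline = pipeline(system, 1.0)

# ...then incrementally improve a single module in relative isolation.
system[1] = Module("search", lambda x: x * 1.5)  # a better search algorithm
improved = pipeline(system, 1.0)

print(baseline, improved)  # the whole benefits; nothing else changed
```

The design point is the stable interface: as long as each module respects it, any module can evolve on its own, which is the incremental path sketched in the list above.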

Of course, incremental improvements may be part of and/or combined with significant breakthroughs. As John Brockman writes in Possible Minds:

Technological prediction is particularly chancy, given that technologies progress by a series of refinements, halted by obstacles and overcome by innovation… I typically find that some of the technological steps I expect to be easy turn out to be impossible, whereas some of the tasks I imagine to be impossible turn out to be easy. You don’t know until you try. [Brockman, 2019]

Intelligence overhang

One of the mechanisms that may lead to the “Surprise, I’m here!” phenomenon comes from ‘intelligence overhang’ (my term).

Some new development in hardware, software, or plain insight may tap vast resources that are already available. With some enhancements, the combination with what lies waiting may become much more efficient, much faster, than could be predicted by looking at the new development alone. For instance, when an A.I. understands language with just enough sophistication, it can read the Internet, upgrade itself, and reach singularity. New algorithms are like instruments with which new discoveries can be made in old big data. The old big data themselves can be seen as a kind of intelligence overhang.
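A toy illustration of this mechanism (my sketch, with made-up data): the ‘old big data’ already exists; a new algorithm applied to the very same data yields a gain far larger than the algorithm alone would suggest. Here, a simple indexed search over data that was already lying around, sorted, replaces a linear scan.

```python
# Illustrative sketch of 'intelligence overhang': the data is old;
# only the algorithm applied to it is new.

old_big_data = list(range(1_000_000))  # a resource already lying around


def linear_scan(data, target):
    """The 'old' instrument: inspect every item. Returns steps taken."""
    steps = 0
    for value in data:
        steps += 1
        if value == target:
            break
    return steps


def binary_search(data, target):
    """The 'new' instrument on the same old data. Returns steps taken."""
    lo, hi, steps = 0, len(data) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            break
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

print(linear_scan(old_big_data, 999_999))    # on the order of a million steps
print(binary_search(old_big_data, 999_999))  # on the order of twenty steps
```

The data never changed; the leap in efficiency was waiting there all along, unlocked only by the new instrument.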

In a way, intelligence overhang is how people – every individual anew – get smart: through being educated on the knowledge of many centuries, combined with the individual’s relatively small mental processing power. For A.I., an infinite number of potential performance gains lies waiting as intelligence overhang, accessible immensely more efficiently than for a human child in a classroom.

The best example of intelligence overhang is related to intelligence itself in the non-human organic → human progression.

Some stretch is needed to make this fit the point, but to me, it is about the same phenomenon. Let me explain in a few sentences. The drive-to-thrive has been present from the beginnings of life. What we associate with intelligence, especially the conceptual kind, came much more recently to this planet. But when it happened, it joined with the already present drive-to-thrive into consciousness. Strictly speaking, the overhang was present in the drive, but that also depends on the viewpoint.

Anyway, the result was consciousness, appearing relatively suddenly through the splashing together of one thing that had long been waiting and another that came along. Nobody saw it coming until it was quite suddenly there: consciousness.

Do we see it coming now in the case of A.I.?

Especially once A.I. gains a modicum of true autonomy in combination with a potential for self-improvement, any intelligence overhang multiplies its development pace. That is, its potential grows exponentially. This is the level Nick Bostrom calls the ‘crossover,’ a

point beyond which the system’s further improvement is mainly driven by the system’s own actions rather than by work performed upon it by others. [Bostrom, 2014]

After the crossover, he sees exponential growth because any increase immediately translates into increased optimization power over the system’s own capabilities, and thus into further improvement of the growth itself. It’s like a vicious spiral.
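The crossover can be made concrete with a toy model (my sketch, not Bostrom’s formalism; all rates are arbitrary assumptions): before the crossover, capability grows by a fixed amount of external effort per step; after it, growth is proportional to the system’s own capability, i.e., it compounds.

```python
# Toy model of Bostrom's 'crossover': linear growth from external work,
# then compounding growth once the system improves itself.

def capability_over_time(steps, crossover_step,
                         external_rate=1.0, self_rate=0.1):
    c = 1.0
    history = [c]
    for t in range(steps):
        if t < crossover_step:
            c += external_rate   # improvement driven by others' work
        else:
            c += self_rate * c   # improvement driven by the system itself
        history.append(c)
    return history


trajectory = capability_over_time(steps=100, crossover_step=50)

# Linear before the crossover, compounding after it.
print(trajectory[50])   # fifty steps of fixed external effort
print(trajectory[100])  # fifty further steps of 10% compounding
```

Even with these modest, arbitrary rates, the post-crossover half of the trajectory dwarfs the pre-crossover half, which is the whole point of the spiral.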

Is this a reason to pull the plug? On the contrary!

The promises of A.I. are immense, as Stuart Russell describes:

Humanity could proceed with their development and reap the almost unimaginable benefits that would flow from the ability to wield far greater intelligence in advancing our civilization. We would be released from millennia of servitude as agricultural, industrial, and clerical robots and we would be free to make the best of life’s potential. [Russell, 2019]

Beyond all potential advantages, B.T. Hyacinth points out:

Opposition to automation, robotics and AI is about as futile as it would have been in the 20th century to oppose electricity. [Hyacinth, 2019]

However, the promises as well as the dangers are definitely good reasons why Compassion should hold the most prominent place within A.I. developments worldwide.

Now.

Finally, I hope – as you already know – that ‘around the corner’ also lies a journey towards a more Compassionate humanity.

There is work to be done in this sector. Will A.I. control us, or will we control it? Hmm. Will A.I. help us, and will we help A.I. to become more Compassionate? I prefer the second question. It may alleviate the first and, hopefully, make it unnecessary.

Being careful is always an essential part of true Compassion. Being careful indeed, I want to envision a self-perpetuating positive spiral of Compassion.

This is an existential issue.
