Why Conscious A.I. is Near

January 1, 2021 ― Artificial Intelligence, Consciousness

Without pinning a date: it is dangerous that many researchers and developers are making progress in many aspects of A.I. without deep insight into consciousness.

Scary?

‘Near’ in the title is meant relatively. The issue is the following: the available paths and the competition are such that I see no other option than that we are speeding head-on towards the moment of insight ― A.I.’s and ours.

Not within the present-day technology of what is called ‘A.I.,’ which has little intelligence inside. Actually, in that setting, it’s probably never going to happen.

But by finding other roads of mental transportation, traveling in another landscape, combining this with that, what is being developed now will undoubtedly play a significant role. Even if only abstractly, it brings the real thing closer, like a wave tumbling down on itself.

At issue is the non-transparency, the conceptual blindness: the edge that is only some vertical line on one’s path until it is reached. Then it opens onto an additional part of town that may be very different.

Mainly, the striving for integration of implicit and explicit processing in A.I.

This will bring many advantages. The aim is a much more flexible artificial entity, applicable in a broad range of situations. Today’s applications are like calculators: very powerful in very narrow fields. Integration is missing, and therefore so is intelligence.

Somewhat technically: the aim brings distributed representations, parallel processing, automatic and systematic generalization, semantic grounding, efficient learning, uncertainty handling, self-explainability, causal and other ways of reasoning, high modularity and flexibility, etc.
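To make the implicit–explicit integration concrete, here is a minimal, loudly hypothetical sketch (not from the article, and far simpler than any real system): an ‘implicit’ layer of distributed prototype vectors with a similarity scorer stands in for learned pattern matching, while an ‘explicit’ layer of readable symbolic rules provides the self-explainability mentioned above. All names and vectors are invented for illustration.

```python
# Hypothetical toy sketch of implicit + explicit processing in one loop.
# "Implicit": distributed representations with a similarity-based scorer.
# "Explicit": human-readable rules that can veto and explain the outcome.
from math import sqrt

def cosine(a, b):
    # Similarity between two dense vectors (stand-in for a trained network).
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Implicit layer: class prototypes as dense vectors (invented values).
prototypes = {
    "cat": [0.9, 0.1, 0.4],
    "car": [0.1, 0.9, 0.2],
}

# Explicit layer: symbolic checks over named features.
rules = {
    "cat": lambda f: f["animate"],
    "car": lambda f: not f["animate"],
}

def classify(vector, features):
    # 1. Implicit pass: pick the best-matching prototype.
    label = max(prototypes, key=lambda k: cosine(vector, prototypes[k]))
    # 2. Explicit pass: rule check plus a readable explanation.
    ok = rules[label](features)
    why = f"matched prototype '{label}'; rule check {'passed' if ok else 'failed'}"
    return (label if ok else None), why

label, why = classify([0.8, 0.2, 0.5], {"animate": True})
print(label, "-", why)
```

The point of the sketch is only the shape of the loop: a sub-symbolic match followed by a symbolic check, each able to correct and explain the other. Real proposals for such integration are, of course, vastly more involved.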

All that is very cool stuff, suitable for many exciting applications and automation

and autonomation.

Yep, not only intelligence, but consciousness on a plate.

Again, what is dangerous in this is the blindness

that I see as pervasive among A.I. engineers ― not towards their engineering, but towards themselves as human beings. Depth psychology is not in their curriculum.

Unfortunately, the next level of blindness lies in present-day psychology and psychologists. This is a Western (and I’m not sure about Eastern) historical issue. For instance, it makes us quite blind towards what is not functioning in psychotherapies and why. [see: “Psychotherapy vs. Psychotherapies”]

This is just an example. The big issue is, in my view, the basic striving of mere-ego – being ‘consciousness,’ in a way – to be exorbitantly dominant, most of all ‘in the own house.’ The whole human species is in a critical stage in this regard. [see: “Three Waves of Attention”]

Please don’t throw me under the bus before giving it some deep thought.

The consequence is that it is (still) very hard to see the beloved ‘consciousness’ as something that isn’t so much in control as one would like. It is ― as in a famous image ― like the rider of an elephant who thinks he controls the elephant, while it’s actually the other way around.

In other words, the real consciousness is not the rider.

Meanwhile, we are developing parts of another elephant-and-rider.

Ironically, we are making the same mistake a second time.

It may be the last time.

My advice is to make the elephant a Compassionate one from the start.

[see: “The Journey Towards Compassionate AI.”]


