Why Conscious A.I. is Near

January 1, 2021 · Artificial Intelligence, Consciousness

Without pinning a date: it is dangerous that many researchers and developers are making progress in many aspects of A.I. without deep insight into consciousness.

Scary?

‘Near’ in the title is meant relatively. The issue is the following: the ways are such, and the competition is such, that I see no other option than that we are speeding head-on towards the moment of insight ― A.I.’s and ours.

It will not happen within the present-day technology of what is called ‘A.I.,’ which has little intelligence inside. Actually, in that setting, it’s probably never going to happen.

But by finding other roads of mental transportation, traveling in another landscape, combining this with that, what is being developed will undoubtedly play a significant role. Even if only abstractly, it brings the real thing closer, like a wave tumbling down on itself.

At issue is the non-transparency, the conceptual blindness ― the edge that is only some vertical line on one’s path until it is reached. Then it reveals an additional part of town that may be very different.

Mainly, the striving for integration of implicit and explicit processing in A.I.

This will bring many advantages. The aim is a much more flexible artificial entity that is applicable in a broad range of situations. Today’s applications are like calculators: very powerful in very narrow fields. Integration is missing, and therefore, so is intelligence.

Somewhat technically: The aim brings distributed representations, parallel processing, automatic and systematic generalization, semantic grounding, efficient learning and uncertainty handling, self-explainability, causal and other ways of reasoning, high modularity and flexibility, etc.
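The contrast between implicit and explicit processing can be made concrete with a toy sketch. Everything below is my own illustrative example, not from the article: an ‘explicit’ store of symbolic facts sits next to an ‘implicit’ distributed representation (concepts as dense vectors), with a small bridge that falls back from one to the other.

```python
import numpy as np

# Explicit processing: discrete, human-readable facts.
facts = {("bird", "can_fly"): True, ("penguin", "can_fly"): False}

# Implicit processing: concepts as dense random vectors (a stand-in for
# learned embeddings); similarity between them is graded, not rule-based.
rng = np.random.default_rng(0)
embeddings = {c: rng.normal(size=8) for c in ["bird", "penguin", "sparrow"]}

def similarity(a, b):
    """Cosine similarity between two concept vectors."""
    va, vb = embeddings[a], embeddings[b]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def can_fly(concept):
    """Bridge between the two kinds of processing: answer explicitly if a
    fact exists; otherwise generalize implicitly by borrowing the answer
    of the most similar concept that does have a fact."""
    if (concept, "can_fly") in facts:
        return facts[(concept, "can_fly")]
    known = [c for (c, _) in facts]
    nearest = max(known, key=lambda c: similarity(concept, c))
    return facts[(nearest, "can_fly")]

print(can_fly("penguin"))  # explicit lookup
print(can_fly("sparrow"))  # implicit generalization via similarity
```

This is, of course, a caricature: real integration research aims for systems in which the symbolic and the distributed sides train and constrain each other, rather than merely falling back from one to the other.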

All that is very cool stuff, suitable for many exciting applications in automation ― and autonomation.

Yep, not only intelligence, but consciousness on a plate.

Again, dangerous in this is the blindness that I see as pervasive among A.I. engineers ― not towards their engineering but towards themselves as human beings. Depth psychology is not in their curriculum.

Unfortunately, the next level of blindness lies in present-day psychology and psychologists. This is a Western historical issue (I’m not sure about the Eastern situation). For instance, it makes us quite blind towards what is not functioning in psychotherapies and why. [see: “Psychotherapy vs. Psychotherapies”]

This is just an example. The big issue is, in my view, the basic striving of mere-ego – being ‘consciousness,’ in a way – to be exorbitantly dominant, most of all ‘in the own house.’ The whole human species is in a critical stage in this regard. [see: “Three Waves of Attention”]

Please don’t throw me under the bus before giving it some deep thought.

The consequence is that it is (still) very hard to see the beloved ‘consciousness’ as something that is not so much in control as one would like. It is ― as in a famous image ― like the rider of an elephant who thinks he controls the elephant, while it’s the other way around.

In other words, the real consciousness is not the rider.

Meanwhile, we are developing parts of another elephant-and-rider.

Ironically, we are making the same mistake a second time.

It may be the last time.

My advice is to make the elephant a Compassionate one from the start.

[see: “The Journey Towards Compassionate AI.”]


