Why Conscious A.I. is Near

January 1, 2021 | Artificial Intelligence, Consciousness

Without pinning a date on it: it’s dangerous that many researchers and developers are making progress in many aspects of A.I. without deep insight into consciousness.

Scary?

‘Near’ in the title is meant relatively. The issue is this: the methods and the competition are such that I see no other option than that we are speeding head-on towards the moment of insight, A.I.’s and ours.

Not within the present-day technology of what is called ‘A.I.’ with little intelligence inside. Actually, in this setting, it’s probably not going to happen, ever.

But by finding other roads of mental transportation, traveling in another landscape, and combining this with that, what is being developed will undoubtedly play a significant role. Even if only abstractly, it brings the real thing closer, like a wave tumbling down on itself.

At issue is the non-transparency, the conceptual blindness: the edge that is merely a vertical line on one’s path until it is reached. Then it turns out to be an additional part of town that may be very different.

Mainly: the striving to integrate implicit and explicit processing in A.I.

This will bring many advantages. The aim is a much more flexible artificial entity that is applicable in a broad range of situations. Today’s applications are like calculators: very powerful in very narrow fields. Integration is missing; therefore, so is intelligence.

Somewhat technically: the aim involves distributed representations, parallel processing, automatic and systematic generalization, semantic grounding, efficient learning and uncertainty handling, self-explainability, causal and other forms of reasoning, high modularity and flexibility, etc.
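To make this a bit more tangible, here is a minimal, purely illustrative sketch of how implicit (graded, sub-symbolic) and explicit (rule-based, symbolic) processing might be combined. All names and numbers are hypothetical; this is a toy picture of the idea, not any existing system.

```python
# Toy sketch: implicit pattern recognition feeding explicit rule-based
# reasoning. Everything here is hypothetical and for illustration only.

from dataclasses import dataclass

@dataclass
class Perception:
    label: str         # explicit symbol distilled from implicit processing
    confidence: float  # graded strength of the implicit pattern match

def implicit_layer(pixel_sum: int) -> Perception:
    """Stand-in for a neural net: returns a graded pattern, not a crisp rule."""
    if pixel_sum > 100:
        return Perception("bright", confidence=0.9)
    return Perception("dark", confidence=0.7)

def explicit_layer(p: Perception) -> str:
    """Stand-in for symbolic reasoning: applies inspectable rules to symbols."""
    rules = {"bright": "daytime scene", "dark": "night scene"}
    if p.confidence < 0.5:
        return "uncertain; gather more evidence"  # explicit uncertainty handling
    return rules.get(p.label, "unknown")

# Only the combination shows the listed properties: grounded, graded
# recognition plus rules that can be explained and inspected.
print(explicit_layer(implicit_layer(pixel_sum=150)))  # -> daytime scene
```

The point of the sketch is only the division of labor: the implicit layer delivers graded patterns, the explicit layer applies rules one can inspect, and properties such as semantic grounding, uncertainty handling, and self-explainability emerge from their integration.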

All that is very cool stuff, suitable for many exciting applications and automation

and autonomation.

Yep, not only intelligence, but consciousness on a plate.

Again, dangerous in this is the blindness

that I see as pervasive among A.I. engineers, not towards their engineering but towards themselves as human beings. Depth psychology is not in their curriculum.

Unfortunately, the next level of blindness lies in present-day psychology and psychologists. This is a Western (and, I’m not sure, perhaps also an Eastern) historical issue. For instance, it makes us quite blind to what is not functioning in psychotherapies, and why. [see: “Psychotherapy vs. Psychotherapies”]

This is just an example. The big issue is, in my view, the basic striving of mere-ego (being ‘consciousness,’ in a way) to be exorbitantly dominant, most of all ‘in its own house.’ The whole human species is in a critical stage in this regard. [see: “Three Waves of Attention”]

Please don’t throw me under the bus before giving it some deep thought.

The consequence is that it is (still) very hard to see the beloved ‘consciousness’ as something that isn’t as much in control as one would like. It is, as in a famous image, like the rider of an elephant who thinks he controls the elephant, while it’s the other way around.

In other words, the real consciousness is not the rider.

Meanwhile, we are developing parts of another elephant-and-rider.

Ironically, we are making the same mistake a second time.

It may be the last time.

My advice is to make the elephant a Compassionate one from the start.

[see: “The Journey Towards Compassionate AI.”]
