Why A.I. Is Less and Less about Technology

June 29, 2024 Artificial Intelligence

As A.I. technology advances, the research focus should shift from mere technical progress to a higher level of development altogether. This blog is not about philosophical implications, but about philosophy as a technological driver ― the philosophy itself becoming the technology.

Currently, the possibilities are so vast and diverse that integration can be considered independently of the foundational levels. This could be termed ‘philosophical technology,’ underscoring A.I.’s capacity to redefine ethical frameworks within technology.

Comparing this to biology versus chemistry

Essentially, all biology is rooted in chemistry, yet biology stands as a distinct field.

This distinction doesn’t reduce the significance of chemical reactions in biology. However, most of the thinking in biology is independent of chemistry. One can abstract away the chemistry and still grasp, and think further about, what happens at the biological level.

The same concerns A.I. at its core: foundational research rather than applications.

Of course, the applications evolve independently; one doesn’t need to know how a car functions to drive it.

More fundamentally, within A.I. development itself, philosophy trumps technology. This is philosophy in a broad sense, encompassing all that pertains to the domain of Deep Minds.

A.I. technology should increasingly dissipate into philosophy, for the following reason.

A.I. fundamentally differs from informatics and other technologies.

A.I. seeks intelligence, which can gradually self-sustain as it matures, becoming a self-perpetuating pattern. As we close in on that stage, the technology that got us there is increasingly taken over by what we are getting: the results of our purely technological striving.

I experience this daily.

This also means that the expertise that got us there is increasingly less crucial for further steps.

Unfortunately, many in A.I. overlook this, leading technologists to assume they should keep driving the field forward. Some collaborate with human-oriented experts, but mostly in a way that fundamentally ignores the new demands.

An example: knowledge

The debate on the nature of ‘knowledge’ remains vibrant. In A.I., one may abstract this away and use a commonsense idea of knowledge, as if it were straightforwardly a technological question. This way, one misses many profound implications of this central concept in human and artificial intelligence.

Delving into the deeper – and sometimes disconcerting – layers of ‘knowledge’ may nevertheless lead to a redefined understanding of intelligence, and to insights that further shape the domain of A.I. as well as what it means to be human, bridging gaps between human and artificial cognition. These insights come first; only then can they be put to use at the core of further A.I. technological development.

We need a broadening at the core.

This necessity will intensify alongside the development of foundational technologies. These are merely building blocks, not the structure. We need to focus on the broader architecture.

This needs fundamentally different expertise.

A.I. has grown. We shouldn’t treat it as a kid anymore.

Viewing A.I. as increasingly mature encourages responsibility in its societal integration. This particularly applies to Compassionate A.I., which logically matures more rapidly. Consequently, it may become our defender against other forms of A.I. On top of that, Compassionate A.I. not only safeguards but also enriches human experiences, promoting empathy and understanding.

We should give it the chance.


